Given an untrimmed video and a natural language query, video sentence grounding aims to localize the target temporal moment in the video. Existing methods mainly tackle this task by matching and aligning the semantics of the descriptive sentence and video segments at a single temporal resolution, neglecting the temporal consistency of video content across resolutions. In this work, we propose a novel multi-resolution temporal video sentence grounding network, MRTNet, which consists of a multi-modal feature encoder, a Multi-Resolution Temporal (MRT) module, and a predictor module. The MRT module is an encoder-decoder network whose decoder features are used in conjunction with Transformers to predict the final start and end timestamps. Notably, the MRT module is hot-pluggable and can be seamlessly incorporated into any anchor-free model. In addition, we employ a hybrid loss to supervise the cross-modal features in the MRT module for more accurate grounding at three scales: frame level, clip level, and sequence level. Extensive experiments on three popular datasets demonstrate the effectiveness of MRTNet.
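As a rough illustration of the multi-resolution temporal idea, the sketch below downsamples and upsamples clip-level features in an encoder-decoder, emitting decoder features at several temporal resolutions. The module, dimensions, and skip-connection fusion are assumptions for illustration, not the authors' MRT implementation.

```python
import torch
import torch.nn as nn

class MultiResolutionTemporal(nn.Module):
    """Illustrative encoder-decoder over clip features at several temporal resolutions."""
    def __init__(self, dim=256, levels=3):
        super().__init__()
        self.down = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=3, stride=2, padding=1) for _ in range(levels)]
        )
        self.up = nn.ModuleList(
            [nn.ConvTranspose1d(dim, dim, kernel_size=4, stride=2, padding=1) for _ in range(levels)]
        )

    def forward(self, x):            # x: (batch, dim, num_clips)
        skips = []
        for conv in self.down:       # encoder: progressively coarser temporal resolution
            skips.append(x)
            x = torch.relu(conv(x))
        outs = []
        for deconv, skip in zip(self.up, reversed(skips)):
            x = torch.relu(deconv(x))
            x = x[..., :skip.size(-1)] + skip   # align length, fuse with skip connection
            outs.append(x)           # decoder features at each resolution feed the predictor
        return outs

feats = torch.randn(2, 256, 64)      # 2 videos, 64 clips each
multi_res = MultiResolutionTemporal()
print([o.shape for o in multi_res(feats)])
```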
Traffic accident prediction in driving videos aims to provide an early warning of accident occurrence and to support the decision making of safe driving systems. Previous works usually concentrate on the spatial-temporal correlation of object-level context, yet they do not fit the inherently long-tailed data distribution well and are vulnerable to severe environmental change. In this work, we propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition, in the form of text descriptions of the visual observations and driver attention, to facilitate model training. In particular, the text description provides dense semantic guidance for the primary context of the traffic scene, while the driver attention provides traction for focusing on the critical regions closely correlated with safe driving. CAP is composed of an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and a driver-attention-guided accident prediction module. We leverage the attention mechanism in these modules to explore the core semantic cues for accident prediction. To train CAP, we extend the existing self-collected DADA-2000 dataset (with annotated driver attention for each frame) with additional factual text descriptions of the visual observations before the accidents. Besides, we construct a new large-scale benchmark of 11,727 in-the-wild accident videos with over 2.19 million frames (named CAP-DATA), together with labeled fact-effect-reason-introspection descriptions and temporal accident frame labels. Extensive experiments validate the superiority of CAP over state-of-the-art approaches. The code, CAP-DATA, and all results will be released at \url{https://github.com/JWFanggit/LOTVS-CAP}.
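The attentive text-to-vision fusion could plausibly be realized with standard cross-attention in which visual tokens query the encoded text description. The sketch below is a hedged stand-in with assumed dimensions and module names, not the released CAP code.

```python
import torch
import torch.nn as nn

class TextToVisionFusion(nn.Module):
    """Sketch: visual tokens query the text description via cross-attention."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vis_tokens, text_tokens):
        # vis_tokens:  (batch, num_regions, dim)  frame/region features
        # text_tokens: (batch, num_words, dim)    encoded description
        fused, attn = self.cross_attn(query=vis_tokens, key=text_tokens, value=text_tokens)
        return self.norm(vis_tokens + fused), attn   # residual fusion + attention map

vis = torch.randn(4, 49, 512)    # e.g., a 7x7 feature map flattened into 49 tokens
txt = torch.randn(4, 20, 512)    # 20 description tokens
out, attn = TextToVisionFusion()(vis, txt)
print(out.shape, attn.shape)     # (4, 49, 512) (4, 49, 20)
```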
We investigate composed image retrieval with text feedback. Users gradually look for the target of interest by moving from coarse- to fine-grained feedback. However, existing methods merely focus on the latter, i.e., fine-grained search, by harnessing positive and negative pairs during training. This pair-based paradigm only considers the one-to-one distance between a pair of specific points, which is not aligned with the one-to-many coarse-grained retrieval process and compromises the recall rate. In an attempt to fill this gap, we introduce a unified learning approach that simultaneously models coarse- and fine-grained retrieval by considering multi-grained uncertainty. The key idea underpinning the proposed method is to integrate fine- and coarse-grained retrieval as matching data points with small and large fluctuations, respectively. Specifically, our method contains two modules: uncertainty modeling and uncertainty regularization. (1) Uncertainty modeling simulates multi-grained queries by introducing identically distributed fluctuations in the feature space. (2) Based on the uncertainty modeling, we further introduce uncertainty regularization to adapt the matching objective according to the fluctuation range. Compared with existing methods, the proposed strategy explicitly prevents the model from pushing away potential candidates in the early stage and thus improves the recall rate. On three public datasets, i.e., FashionIQ, Fashion200k, and Shoes, the proposed method achieves +4.03%, +3.38%, and +2.40% Recall@50 over a strong baseline, respectively.
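A minimal sketch of the two modules as they might be implemented: uncertainty modeling jitters query features with identically distributed Gaussian noise of a chosen magnitude, and uncertainty regularization down-weights the matching loss for strongly jittered (coarse) samples. The InfoNCE-style objective and the 1/(1+sigma) weighting are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def jitter(features, sigma):
    """Uncertainty modeling: simulate a multi-grained query by adding
    identically distributed Gaussian fluctuations (larger sigma = coarser feedback)."""
    return features + sigma * torch.randn_like(features)

def uncertainty_weighted_matching(query, target, sigma, temperature=0.07):
    """InfoNCE-style matching whose weight shrinks as the fluctuation grows,
    so coarse (heavily jittered) queries are not forced onto a single target."""
    q = F.normalize(jitter(query, sigma), dim=-1)
    t = F.normalize(target, dim=-1)
    logits = q @ t.T / temperature                       # (batch, batch) similarities
    labels = torch.arange(q.size(0), device=q.device)
    loss = F.cross_entropy(logits, labels)
    return loss / (1.0 + sigma)                          # assumed regularization schedule

query = torch.randn(8, 256, requires_grad=True)
target = torch.randn(8, 256, requires_grad=True)
loss = uncertainty_weighted_matching(query, target, sigma=0.0) \
     + uncertainty_weighted_matching(query, target, sigma=0.5)
loss.backward()
```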
The rapid development of aspect-based sentiment analysis (ABSA) within recent decades shows great potential for real-world applications. Current ABSA works, however, are mostly limited to the scenario of a single text piece, leaving the study of dialogue contexts unexplored. In this work, we introduce a novel task of conversational aspect-based sentiment quadruple analysis, namely DiaASQ, which aims to detect target-aspect-opinion-sentiment quadruples in a dialogue. DiaASQ bridges the gap between fine-grained sentiment analysis and conversational opinion mining. We manually construct a large-scale, high-quality Chinese dataset and also obtain an English version via manual translation. We further propose a neural model to benchmark the task; it effectively performs end-to-end quadruple prediction and incorporates rich dialogue-specific and discourse feature representations for better cross-utterance quadruple extraction. Finally, we point out several potential directions to facilitate follow-up research on this new task. The DiaASQ data is available at https://github.com/unikcc/DiaASQ.
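For concreteness, a single DiaASQ annotation can be thought of as a target-aspect-opinion-sentiment quadruple tied to the utterances it spans. The field names below are an illustrative assumption, not the dataset's exact schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SentimentQuadruple:
    target: str               # entity under discussion, e.g. a phone model
    aspect: str               # attribute of the target, e.g. "battery life"
    opinion: str              # opinion expression, e.g. "drains too fast"
    sentiment: str            # polarity: "pos" | "neg" | "neu"
    utterance_ids: List[int]  # dialogue utterances the quadruple spans

dialogue_annotation = [
    SentimentQuadruple("Phone X", "battery life", "drains too fast", "neg", [2, 4]),
]
```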
Video Question Answering (VideoQA) is the task of answering natural language questions about a video. Producing an answer requires understanding the interplay between the visual scenes in the video and the linguistic semantics of the question. However, most leading VideoQA models work as black boxes, which makes the visual-linguistic alignment behind the answering process obscure. Such a black-box nature calls for visual explainability that reveals "which part of the video should the model look at to answer the question?". Only a few works present visual explanations in a post-hoc fashion, mimicking the target model's answering process with an additional method; nonetheless, such emulation struggles to faithfully exhibit the visual-linguistic alignment during answering. Instead of post-hoc explainability, we focus on intrinsic interpretability that makes the answering process transparent. At its core is grounding the question-critical cues as the causal scene that yields the answer, while rolling out the question-irrelevant information as the environment scene. Taking a causal look at VideoQA, we devise a self-interpretable framework, Equivariant and Invariant Grounding for Interpretable VideoQA (EIGV). Specifically, equivariant grounding encourages the answering to be sensitive to semantic changes in the causal scene and the question; in contrast, invariant grounding forces the answering to be insensitive to changes in the environment scene. By imposing them on the answering process, EIGV is able to distinguish the causal scene from the environment information and explicitly present the visual-linguistic alignment. Extensive experiments on three benchmark datasets demonstrate the superiority of EIGV over the leading baselines in terms of both accuracy and visual interpretability.
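A compact sketch of how the equivariant and invariant grounding objectives could be combined, assuming a grounding network that soft-splits clip features into causal and environment parts. The loss composition and stand-in modules below are illustrative, not the released EIGV code.

```python
import torch
import torch.nn.functional as F

def soft_ground(clips, grounder):
    """Soft-split clip features into a causal scene and an environment scene."""
    gate = torch.sigmoid(grounder(clips))                 # (batch, num_clips, 1)
    return gate * clips, (1.0 - gate) * clips

def eigv_style_objective(answerer, grounder, clips, question, labels):
    causal, env = soft_ground(clips, grounder)
    shuffled_env = env[torch.randperm(env.size(0))]       # environment taken from other videos
    # equivariant part: the answer must follow the causal scene plus the question
    logits = answerer(causal + env, question)
    equivariant = F.cross_entropy(logits, labels)
    # invariant part: swapping in another video's environment should not move the answer
    logits_swapped = answerer(causal + shuffled_env, question)
    invariant = F.kl_div(F.log_softmax(logits_swapped, dim=-1),
                         F.softmax(logits, dim=-1).detach(), reduction="batchmean")
    return equivariant + invariant

# toy usage with stand-in modules
clips, question = torch.randn(4, 32, 256), torch.randn(4, 256)
labels = torch.randint(0, 10, (4,))
grounder, head = torch.nn.Linear(256, 1), torch.nn.Linear(256, 10)
answerer = lambda v, q: head(v.mean(dim=1) + q)           # stand-in answering module
print(eigv_style_objective(answerer, grounder, clips, question, labels))
```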
Document Visual Question Answering (VQA) aims to understand visually rich documents in order to answer questions in natural language, an emerging research topic in both natural language processing and computer vision. In this work, we introduce a new document VQA dataset named TAT-DQA, built by extending the TAT-QA dataset; it consists of 3,067 document pages containing semi-structured tables and unstructured text, along with 16,558 question-answer pairs. The documents are sampled from real-world financial reports and contain many numbers, which means discrete reasoning capability is required to answer questions on this dataset. Based on TAT-DQA, we further develop a novel model named MHST that considers information across multiple modalities, including text, layout, and visual image, to intelligently address different types of questions with the corresponding strategy, i.e., extraction or reasoning. Extensive experiments show that the MHST model significantly outperforms the baseline methods, demonstrating its effectiveness. However, its performance still lags far behind that of expert humans. We expect the new TAT-DQA dataset to facilitate research on deep understanding of visually rich documents that combines vision and language, especially for scenarios requiring discrete reasoning. We also hope the proposed model will inspire researchers to design more advanced document VQA models in the future.
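The extract-or-reason routing attributed to MHST can be sketched as a small classifier over a fused multi-modal encoding that dispatches each question to a span-extraction head or a numerical-reasoning head. The two-way split, head shapes, and operator set below are assumptions for illustration, not the MHST implementation.

```python
import torch
import torch.nn as nn

class ExtractOrReasonRouter(nn.Module):
    """Sketch: decide per question whether to extract a span or run discrete reasoning."""
    def __init__(self, dim=768):
        super().__init__()
        self.router = nn.Linear(dim, 2)       # 0 = extraction, 1 = reasoning
        self.span_head = nn.Linear(dim, 2)    # start/end logits per token
        self.reason_head = nn.Linear(dim, 4)  # e.g. add/sub/count/compare (assumed ops)

    def forward(self, fused_tokens):
        # fused_tokens: (batch, seq, dim) from a text + layout + image encoder
        pooled = fused_tokens.mean(dim=1)
        strategy = self.router(pooled).argmax(dim=-1)   # chosen strategy per question
        span_logits = self.span_head(fused_tokens)      # used when strategy == 0
        op_logits = self.reason_head(pooled)            # used when strategy == 1
        return strategy, span_logits, op_logits

tokens = torch.randn(2, 128, 768)
strategy, span_logits, op_logits = ExtractOrReasonRouter()(tokens)
print(strategy.shape, span_logits.shape, op_logits.shape)
```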
This paper proposes a Video Graph Transformer (VGT) model for Video Question Answering (VideoQA). VGT is unique in two aspects: 1) it designs a dynamic graph transformer module that encodes video by explicitly capturing visual objects, their relations, and their dynamics for complex spatio-temporal reasoning; and 2) it exploits disentangled video and text Transformers to compare the relevance between video and text for QA, instead of an entangled cross-modal Transformer for answer classification. Vision-text communication is done by additional cross-modal interaction modules. With a more reasonable video encoding and QA solution, we show that VGT achieves much better performance than prior arts on VideoQA tasks that challenge dynamic relation reasoning, in the pretraining-free setting. Its performance even surpasses models pretrained on millions of external examples. We further show that VGT also benefits considerably from self-supervised cross-modal pretraining, yet with orders of magnitude less data. These results clearly demonstrate the effectiveness and superiority of VGT, and reveal its potential for more data-efficient pretraining. With comprehensive analyses and some heuristic observations, we hope VGT can promote VQA research beyond coarse recognition/description towards fine-grained relation reasoning in realistic videos. Our code is available at https://github.com/sail-sg/vgt.
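The disentangled relevance-comparison formulation (as opposed to cross-modal answer classification) can be sketched as separately encoded video and candidate-answer representations scored by dot product. The plain Transformer encoders below are stand-ins for VGT's graph-based video encoder, and all names are illustrative.

```python
import torch
import torch.nn as nn

class RelevanceQA(nn.Module):
    """Sketch: score each candidate answer by its similarity to the video encoding,
    keeping the video and text encoders disentangled."""
    def __init__(self, dim=512):
        super().__init__()
        self.video_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True), num_layers=2)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True), num_layers=2)

    def forward(self, video_tokens, answer_tokens):
        # video_tokens: (batch, clips, dim); answer_tokens: (batch, answers, words, dim)
        v = self.video_encoder(video_tokens).mean(dim=1)                    # (batch, dim)
        b, a, w, d = answer_tokens.shape
        t = self.text_encoder(answer_tokens.reshape(b * a, w, d)).mean(dim=1).reshape(b, a, d)
        return torch.einsum("bd,bad->ba", v, t)                             # relevance scores

scores = RelevanceQA()(torch.randn(2, 16, 512), torch.randn(2, 5, 12, 512))
print(scores.shape)   # (2, 5): one score per candidate answer
```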
The waterfall recommender system (RS), a popular form of RS in mobile applications, is a stream of recommended items composed of successive pages that users browse by scrolling. In a waterfall RS, when a user finishes browsing a page, the edge device (e.g., a mobile phone) sends a request to the cloud server to fetch a new page of recommendations, which is known as the paging request mechanism. RSs typically put a large number of items on each page to reduce the excessive resource consumption of numerous paging requests; however, this weakens the RS's ability to timely renew recommendations according to the user's real-time interest and leads to a poor user experience. Intuitively, inserting additional requests within a page to update recommendations more frequently can alleviate this problem, but previous attempts, including non-adaptive strategies (e.g., uniformly inserting requests), eventually lead to resource overconsumption. To this end, we envision a new edge-intelligence learning task named Intelligent Request Strategy Design (IRSD), which aims to improve the effectiveness of waterfall RSs by determining the appropriate occasions for request insertion based on users' real-time intention. Moreover, we propose a new paradigm of adaptive request insertion named the Uplift-based On-edge Smart Request Framework (AdaRequest). AdaRequest 1) captures the dynamic change of users' intention by matching their real-time behaviors against their historical interests with attention-based neural networks; 2) estimates the counterfactual uplift in user purchases brought by an inserted request based on causal inference; and 3) determines the final request insertion strategy by maximizing a utility function under the online resource constraint. We conduct extensive experiments on offline datasets and online A/B tests to verify the effectiveness of AdaRequest.
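The decision layer of such a framework can be sketched as a simple rule: given per-position uplift estimates for a page, insert requests at the highest-uplift positions within a request budget. The greedy scheme below is an assumed simplification of the utility maximization, not the paper's optimization.

```python
import numpy as np

def plan_requests(uplift, budget):
    """Greedy sketch: given per-position uplift estimates for one page,
    insert requests at the highest-uplift positions within the request budget."""
    order = np.argsort(uplift)[::-1]               # positions sorted by estimated uplift
    chosen = order[:budget]
    decision = np.zeros_like(uplift, dtype=bool)
    decision[chosen[uplift[chosen] > 0]] = True    # never insert when uplift is non-positive
    return decision

# toy example: estimated purchase uplift for inserting a request after each card
uplift = np.array([0.02, -0.01, 0.08, 0.01, 0.05])
print(plan_requests(uplift, budget=2))             # [False False  True False  True]
```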
Leading graph contrastive learning (GCL) methods perform graph augmentation in two fashions: (1) randomly corrupting the anchor graph, which could cause the loss of semantic information, or (2) using domain knowledge to maintain salient features, which undermines generalization to other domains. Taking an invariance view of GCL, we argue that a high-performing augmentation should preserve the salient semantics of the anchor graph with respect to instance discrimination. To this end, we relate GCL to invariant rationale discovery and propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL). Specifically, without supervision signals, RGCL uses a rationale generator to reveal salient features for graph instance discrimination as the rationale, and then creates rationale-aware views for contrastive learning. This rationale-aware pre-training scheme endows the backbone model with powerful representation ability, which further facilitates fine-tuning on downstream tasks. On the MNIST-Superpixel and MUTAG datasets, visual inspection of the discovered rationales shows that the rationale generator successfully captures the salient features (i.e., the distinguishing semantic nodes in the graphs). On biochemical molecule and social network benchmark datasets, the state-of-the-art performance of RGCL demonstrates the effectiveness of rationale-aware contrastive learning. Our code is available at https://github.com/lsh0520/rgcl.
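A compact sketch of rationale-aware view construction: a node-scoring generator samples salient nodes to form two stochastic rationale views of each graph, which an InfoNCE loss then pulls together. The sampling scheme and the linear stand-ins for the scorer and encoder are assumptions, not the released RGCL code.

```python
import torch
import torch.nn.functional as F

def rationale_view(node_feats, scorer, keep_ratio=0.7):
    """Keep a stochastic subset of salient nodes as a rationale-aware view."""
    scores = scorer(node_feats).squeeze(-1)                   # (num_nodes,) saliency
    probs = torch.softmax(scores, dim=0)
    k = max(1, int(keep_ratio * node_feats.size(0)))
    idx = torch.multinomial(probs, k, replacement=False)      # sample salient nodes
    return node_feats[idx]

def info_nce(z1, z2, temperature=0.2):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / temperature
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# toy batch of graphs, each encoded by mean-pooling a rationale view (encoder is a stand-in)
scorer, encoder = torch.nn.Linear(64, 1), torch.nn.Linear(64, 128)
graphs = [torch.randn(n, 64) for n in (12, 20, 15, 9)]
z1 = torch.stack([encoder(rationale_view(g, scorer)).mean(0) for g in graphs])
z2 = torch.stack([encoder(rationale_view(g, scorer)).mean(0) for g in graphs])
loss = info_nce(z1, z2)
```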
Learning causal structure from observational data is a fundamental challenge in machine learning. However, most commonly used differentiable causal discovery methods are non-identifiable, turning the problem into a continuous optimization task that is prone to data bias. In many real-life situations, data is collected from different environments, where the functional relations remain consistent across environments while the distribution of the additive noise may vary. This paper proposes Differentiable Invariant Causal Discovery (DICD), which leverages multi-environment information within a differentiable framework to avoid learning spurious edges and wrong causal directions. Specifically, DICD aims to discover the environment-invariant causation while removing environment-dependent correlations. We further formulate a constraint that enforces the target structural equation model to remain optimal across environments. Theoretical guarantees on the identifiability of DICD are provided under mild conditions given sufficient environments. Extensive experiments on synthetic and real-world datasets verify that DICD outperforms state-of-the-art causal discovery methods by up to 36% in terms of SHD. Our code will be open-sourced.
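In the differentiable setting the abstract describes, the objective can be sketched as a per-environment least-squares fit of a shared linear SEM under the usual NOTEARS acyclicity constraint, plus a penalty discouraging the fit from varying across environments. The variance-based invariance penalty below is an assumption for illustration, not the DICD formulation.

```python
import torch

def acyclicity(W):
    """NOTEARS constraint h(W) = tr(exp(W*W)) - d; zero iff the weighted graph is acyclic."""
    d = W.size(0)
    return torch.trace(torch.matrix_exp(W * W)) - d

def dicd_style_loss(W, envs, rho=10.0, lam=1.0):
    """Least-squares fit of a shared linear SEM across environments + acyclicity penalty
    + an (assumed) invariance penalty on per-environment fit discrepancies."""
    fits = []
    for X in envs:                                # X: (samples, d) per environment
        resid = X - X @ W
        fits.append((resid ** 2).mean())
    fit = torch.stack(fits)
    invariance = fit.var()                        # discourage environment-specific fit gaps
    return fit.mean() + lam * invariance + rho * acyclicity(W) ** 2

d = 5
W = torch.zeros(d, d, requires_grad=True)
envs = [torch.randn(200, d), torch.randn(150, d)]
opt = torch.optim.Adam([W], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = dicd_style_loss(W, envs)
    loss.backward()
    opt.step()
```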